Distributionally Adversarial Attack
Authors
Abstract
Similar articles
Learning to Attack: Adversarial Transformation Networks
With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direc...
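The gradient-based attacks this abstract contrasts against can be illustrated with a minimal sketch. The example below is not the paper's method: it applies a one-step gradient-sign perturbation to a hand-picked logistic-regression model (weights `w`, `b` and the function name are illustrative assumptions), stepping the input in the direction that increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_sign_attack(x, y, w, b, eps):
    """One-step gradient-sign perturbation of the input x for a
    logistic-regression model p = sigmoid(w.x + b) with true label y."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w               # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)   # step in the loss-increasing direction

# Toy model: fixed weights classifying 2-D points.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.6, 0.1])               # positive logit -> predicted positive
x_adv = gradient_sign_attack(x, y=1.0, w=w, b=b, eps=0.6)
print(np.dot(w, x) + b > 0, np.dot(w, x_adv) + b > 0)   # True False
```

A perturbation of at most 0.6 per coordinate is enough to flip this toy model's prediction, even though the true label is unchanged.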
ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
Thanks to their excellent accuracy and feasibility, neural networks (NNs) have been widely applied in novel intelligent applications and systems. However, with the appearance of adversarial attacks, the performance of NN-based systems becomes extremely vulnerable: image classification results can be arbitrarily misled by adversarial examples, which are crafted images with human-unperc...
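The saliency idea behind attacks of this kind can be sketched simply, though the snippet below is an illustrative assumption, not the ASP framework itself: it scores each input feature by the magnitude of the loss gradient and perturbs only the top-k most "salient" features of a toy logistic-regression model.

```python
import numpy as np

def saliency_guided_attack(x, y, w, b, k, delta):
    """Perturb only the k features with the largest loss-gradient
    magnitude (a crude per-feature 'saliency' score), rather than
    touching every input dimension."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad = (p - y) * w                       # d(cross-entropy)/dx
    top = np.argsort(-np.abs(grad))[:k]      # indices of most salient features
    x_adv = x.copy()
    x_adv[top] += delta * np.sign(grad[top])
    return x_adv

# 4-feature toy input: only the two most salient features get changed.
w = np.array([2.0, -0.1, 0.05, 1.5])
b = 0.0
x = np.array([0.2, 0.5, 0.5, 0.1])
x_adv = saliency_guided_attack(x, y=1.0, w=w, b=b, k=2, delta=1.0)
print(int((x_adv != x).sum()))                           # -> 2
print(np.dot(w, x) + b > 0, np.dot(w, x_adv) + b > 0)    # True False
```

Restricting the perturbation to the most influential features keeps the number of modified inputs small while still flipping the toy model's prediction.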
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
We introduce two tactics, namely the strategically-timed attack and the enchanting attack, to attack reinforcement learning agents trained by deep reinforcement learning algorithms using adversarial examples. In the strategically-timed attack, the adversary aims at minimizing the agent's reward by attacking the agent at only a small subset of time steps in an episode. Limiting the attack activit...
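One way to realize "attack only at a small subset of time steps" is to attack where the policy most strongly prefers one action. The sketch below is an assumed, simplified heuristic (the function name and budget parameter are illustrative, not the paper's exact formulation): it ranks time steps by the gap between the best and second-best action probabilities and attacks only the top fraction.

```python
import numpy as np

def select_attack_steps(action_probs, budget):
    """Pick the time steps where an attack is likely to hurt most:
    those where the policy's best action clearly beats the runner-up.

    action_probs: (T, A) array of per-step action probabilities.
    budget: fraction of the T steps the adversary may attack.
    Returns sorted indices of the time steps to attack.
    """
    sorted_p = np.sort(action_probs, axis=1)
    gap = sorted_p[:, -1] - sorted_p[:, -2]      # best minus second best
    n_attack = max(1, int(budget * len(gap)))
    return np.sort(np.argsort(-gap)[:n_attack])

# Toy episode: 8 steps, 3 actions; the two confident steps get attacked.
probs = np.array([
    [0.34, 0.33, 0.33],   # nearly uniform -> low priority
    [0.90, 0.05, 0.05],   # confident      -> high priority
    [0.40, 0.35, 0.25],
    [0.85, 0.10, 0.05],   # confident      -> high priority
    [0.36, 0.32, 0.32],
    [0.50, 0.30, 0.20],
    [0.33, 0.33, 0.34],
    [0.45, 0.30, 0.25],
])
print(select_attack_steps(probs, budget=0.25))   # -> [1 3]
```

With a budget of 25% of the episode, only the two most confident steps are selected, matching the intuition that perturbing a near-uniform policy wastes the attack budget.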
Adversarial Label Flips Attack on Support Vector Machines
To develop a robust classification algorithm in the adversarial setting, it is important to understand the adversary’s strategy. We address the problem of label flips attack where an adversary contaminates the training set through flipping labels. By analyzing the objective of the adversary, we formulate an optimization framework for finding the label flips that maximize the classification erro...
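The label-flip idea can be demonstrated on a toy scale. The sketch below is an assumption for illustration, not the paper's optimization framework: it uses a nearest-centroid classifier as a stand-in for the SVM and greedily flips the training labels that most increase the retrained classifier's error against the clean labels.

```python
import numpy as np

def centroid_predict(X_train, y_train, X):
    """Nearest-centroid classifier (a stand-in for the paper's SVM)."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

def greedy_label_flips(X, y, budget):
    """Greedily flip the training labels that most increase the error
    of the retrained classifier, measured against the clean labels."""
    y_adv = y.copy()
    for _ in range(budget):
        best_err, best_i = -1.0, 0
        for i in range(len(y_adv)):
            y_try = y_adv.copy()
            y_try[i] = 1 - y_try[i]
            err = np.mean(centroid_predict(X, y_try, X) != y)
            if err > best_err:
                best_err, best_i = err, i
        y_adv[best_i] = 1 - y_adv[best_i]
    return y_adv

# Six 1-D points, three per class: clean training error is zero.
X = np.arange(6, dtype=float).reshape(-1, 1)
y = np.array([0, 0, 0, 1, 1, 1])

clean_err = np.mean(centroid_predict(X, y, X) != y)
y_adv = greedy_label_flips(X, y, budget=2)
flip_err = np.mean(centroid_predict(X, y_adv, X) != y)
print(clean_err, flip_err)   # flipped labels push the error above zero
```

Even two flipped labels in this six-point set shift the class centroids enough to misclassify clean points, which is the contamination effect the paper's optimization framework maximizes.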
Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning
As the prevalence and everyday use of machine learning algorithms, along with our reliance on them, grow dramatically, so do the efforts to attack and undermine these algorithms with malicious intent, resulting in a growing interest in adversarial machine learning. A number of approaches have been developed that can render a machine learning algorithm ineffective through poisoning or...
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2019
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v33i01.33012253